05. Feedforward Neural Network - A Reminder

Feedforward Neural Network - A Reminder

The mathematical calculations needed for training RNN systems are fascinating. To deeply understand the process, we first need to feel confident with the vanilla feedforward neural network (FFNN). We need to thoroughly understand the feedforward process, as well as the backpropagation process used in the training phase of such systems.
The next few videos will cover these topics, which you are already familiar with. We will address the feedforward process as well as backpropagation, using specific examples. These examples will serve as extra material to help you better understand RNNs later in this lesson.

The following couple of videos will give you a brief overview of the Feedforward Neural Network (FFNN).

[Video: RNN FFNN Reminder, Part A]

OK, you can take a small break now. We will continue with FFNN when you come back!

[Video: RNN FFNN Reminder, Part B]

As mentioned before, when working with neural networks we have two primary phases:

  • Training
  • Evaluation

During the training phase, we take the data set (also called the training set), which includes many pairs of inputs and their corresponding targets (outputs). Our goal is to find a set of weights that would best map the inputs to the desired outputs.
In the evaluation phase, we use the network that was created in the training phase, apply our new inputs and expect to obtain the desired outputs.
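
To make the two phases concrete, here is a minimal sketch in Python (NumPy and the toy data are my own assumptions; the lesson does not prescribe a specific library). During training we search for weights that map the training inputs to their targets; during evaluation we simply apply the learned weights to new inputs.

    import numpy as np

    # Training set: pairs of inputs and their corresponding targets
    # (toy data, generated from target = 2x + 1).
    inputs  = np.array([[0.0], [1.0], [2.0], [3.0]])
    targets = np.array([[1.0], [3.0], [5.0], [7.0]])

    # --- Training phase: find weights that best map inputs to targets ---
    w, b = 0.0, 0.0                 # initial weight and bias
    learning_rate = 0.05
    for _ in range(2000):
        predictions = w * inputs + b                      # forward pass
        error = predictions - targets
        w -= learning_rate * (error * inputs).mean()      # gradient step on the weight
        b -= learning_rate * error.mean()                 # gradient step on the bias

    # --- Evaluation phase: apply the trained weights to new inputs ---
    new_inputs = np.array([[4.0], [5.0]])
    print(w * new_inputs + b)       # should be close to [[9.], [11.]]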

The training phase includes two steps:

  • Feedforward
  • Backpropagation

We repeat these two steps as many times as needed, until we decide that the system has reached the best set of weights it can, giving us the best possible outputs.
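
To see how these two steps fit together, here is a minimal sketch in Python (again an assumption on my part: the layer sizes, sigmoid activations, learning rate, and XOR data below are chosen purely for illustration, not taken from the videos):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy training set: inputs x and their targets y (XOR).
    x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    # Weight matrices: W1 (input -> hidden) and W2 (hidden -> output), plus biases.
    W1 = rng.normal(0.0, 1.0, (2, 8))
    W2 = rng.normal(0.0, 1.0, (8, 1))
    b1 = np.zeros((1, 8))
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(10000):
        # --- Step 1, feedforward: pass the inputs through the network ---
        h = sigmoid(x @ W1 + b1)          # hidden-layer activations
        y_hat = sigmoid(h @ W2 + b2)      # network outputs

        # --- Step 2, backpropagation: push the error back and update the weights ---
        d_out = (y_hat - y) * y_hat * (1.0 - y_hat)   # output-layer delta
        d_hid = (d_out @ W2.T) * h * (1.0 - h)        # hidden-layer delta
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * x.T @ d_hid
        b1 -= lr * d_hid.sum(axis=0, keepdims=True)

    print(np.round(y_hat, 2))   # should move toward the XOR targets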

The next two videos will focus on the feedforward process.

You will notice that in these videos I use subscripts as well as superscripts as numeric notation for the weight matrices.

For example:

  • W_k is weight matrix k
  • W_{ij}^k is the ij element of weight matrix k
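
For example, for an illustrative 2-by-3 weight matrix (the shape here is my own choice, just to show the indexing), this notation reads:

    W_k =
    \begin{bmatrix}
      W_{11}^{k} & W_{12}^{k} & W_{13}^{k} \\
      W_{21}^{k} & W_{22}^{k} & W_{23}^{k}
    \end{bmatrix}

so that i indexes the row and j indexes the column of weight matrix k.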